
📌 Retain class distribution for seed 4:
Class 0: 4500
Class 1: 4500
Class 2: 4500
Class 3: 4500
Class 4: 4500
Class 5: 4500
Class 6: 4500
Class 7: 4500
Class 8: 4500
Class 9: 4500

📌 Forget class distribution for seed 4:
Class 0: 500
Class 1: 500
Class 2: 500
Class 3: 500
Class 4: 500
Class 5: 500
Class 6: 500
Class 7: 500
Class 8: 500
Class 9: 500
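The counts above are consistent with a stratified 10% forget split: 500 of each class's 5,000 CIFAR-10 training images go to the forget set and 4,500 to the retain set, drawn with seed 4. The actual splitting code is not part of this log; below is a minimal sketch of how such a split could be produced (the helper name is hypothetical):

```python
import numpy as np
from torch.utils.data import Subset

def stratified_retain_forget_split(dataset, forget_frac=0.1, seed=4):
    """Split a labeled dataset into retain/forget subsets, per class.

    For CIFAR-10 (5,000 training images per class) and forget_frac=0.1
    this yields 4,500 retain and 500 forget samples per class, matching
    the distributions printed above.
    """
    labels = np.asarray(dataset.targets)          # torchvision CIFAR10 exposes .targets
    rng = np.random.default_rng(seed)
    retain_idx, forget_idx = [], []
    for c in np.unique(labels):
        cls_idx = np.where(labels == c)[0]
        rng.shuffle(cls_idx)                      # per-class shuffle, seeded
        n_forget = int(len(cls_idx) * forget_frac)
        forget_idx.extend(cls_idx[:n_forget].tolist())
        retain_idx.extend(cls_idx[n_forget:].tolist())
    return Subset(dataset, retain_idx), Subset(dataset, forget_idx)
```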
⚠️ Warning: Retain train loader may not be shuffled.
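If the retain loader really is unshuffled, every epoch visits the retain samples in the same fixed order, which can slow SGD convergence. A minimal fix, assuming standard PyTorch DataLoaders over the subsets from the sketch above (batch size 256 matches the step size in the log below; the worker settings are assumptions):

```python
from torch.utils.data import DataLoader

# shuffle=True re-permutes the retain subset every epoch; the warning
# above suggests the loader may have been built without it.
retain_loader = DataLoader(retain_set, batch_size=256, shuffle=True,
                           num_workers=4, pin_memory=True)
```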
Training Epoch: 1 [256/45000]	Loss: 2.3366	LR: 0.000000
Training Epoch: 1 [512/45000]	Loss: 2.3093	LR: 0.000568
Training Epoch: 1 [768/45000]	Loss: 2.3102	LR: 0.001136
Training Epoch: 1 [1024/45000]	Loss: 2.3236	LR: 0.001705
Training Epoch: 1 [1280/45000]	Loss: 2.2652	LR: 0.002273
Training Epoch: 1 [1536/45000]	Loss: 2.2488	LR: 0.002841
Training Epoch: 1 [1792/45000]	Loss: 2.2376	LR: 0.003409
Training Epoch: 1 [2048/45000]	Loss: 2.2594	LR: 0.003977
Training Epoch: 1 [2304/45000]	Loss: 2.1838	LR: 0.004545
Training Epoch: 1 [2560/45000]	Loss: 2.2280	LR: 0.005114
Training Epoch: 1 [2816/45000]	Loss: 2.0730	LR: 0.005682
Training Epoch: 1 [3072/45000]	Loss: 2.0879	LR: 0.006250
Training Epoch: 1 [3328/45000]	Loss: 2.0571	LR: 0.006818
Training Epoch: 1 [3584/45000]	Loss: 2.0624	LR: 0.007386
Training Epoch: 1 [3840/45000]	Loss: 1.9746	LR: 0.007955
Training Epoch: 1 [4096/45000]	Loss: 1.8945	LR: 0.008523
Training Epoch: 1 [4352/45000]	Loss: 2.0039	LR: 0.009091
Training Epoch: 1 [4608/45000]	Loss: 1.8843	LR: 0.009659
Training Epoch: 1 [4864/45000]	Loss: 1.8870	LR: 0.010227
Training Epoch: 1 [5120/45000]	Loss: 1.9269	LR: 0.010795
Training Epoch: 1 [5376/45000]	Loss: 1.8730	LR: 0.011364
Training Epoch: 1 [5632/45000]	Loss: 1.7782	LR: 0.011932
Training Epoch: 1 [5888/45000]	Loss: 1.7290	LR: 0.012500
Training Epoch: 1 [6144/45000]	Loss: 1.6180	LR: 0.013068
Training Epoch: 1 [6400/45000]	Loss: 1.7007	LR: 0.013636
Training Epoch: 1 [6656/45000]	Loss: 1.8075	LR: 0.014205
Training Epoch: 1 [6912/45000]	Loss: 1.7807	LR: 0.014773
Training Epoch: 1 [7168/45000]	Loss: 1.8043	LR: 0.015341
Training Epoch: 1 [7424/45000]	Loss: 1.8451	LR: 0.015909
Training Epoch: 1 [7680/45000]	Loss: 1.6973	LR: 0.016477
Training Epoch: 1 [7936/45000]	Loss: 1.7092	LR: 0.017045
Training Epoch: 1 [8192/45000]	Loss: 1.7097	LR: 0.017614
Training Epoch: 1 [8448/45000]	Loss: 1.6484	LR: 0.018182
Training Epoch: 1 [8704/45000]	Loss: 1.6100	LR: 0.018750
Training Epoch: 1 [8960/45000]	Loss: 1.6632	LR: 0.019318
Training Epoch: 1 [9216/45000]	Loss: 1.5879	LR: 0.019886
Training Epoch: 1 [9472/45000]	Loss: 1.6492	LR: 0.020455
Training Epoch: 1 [9728/45000]	Loss: 1.6428	LR: 0.021023
Training Epoch: 1 [9984/45000]	Loss: 1.6676	LR: 0.021591
Training Epoch: 1 [10240/45000]	Loss: 1.6871	LR: 0.022159
Training Epoch: 1 [10496/45000]	Loss: 1.6255	LR: 0.022727
Training Epoch: 1 [10752/45000]	Loss: 1.8012	LR: 0.023295
Training Epoch: 1 [11008/45000]	Loss: 1.8146	LR: 0.023864
Training Epoch: 1 [11264/45000]	Loss: 1.6671	LR: 0.024432
Training Epoch: 1 [11520/45000]	Loss: 1.5547	LR: 0.025000
Training Epoch: 1 [11776/45000]	Loss: 1.5310	LR: 0.025568
Training Epoch: 1 [12032/45000]	Loss: 1.6045	LR: 0.026136
Training Epoch: 1 [12288/45000]	Loss: 1.4889	LR: 0.026705
Training Epoch: 1 [12544/45000]	Loss: 1.5508	LR: 0.027273
Training Epoch: 1 [12800/45000]	Loss: 1.5450	LR: 0.027841
Training Epoch: 1 [13056/45000]	Loss: 1.6512	LR: 0.028409
Training Epoch: 1 [13312/45000]	Loss: 1.5898	LR: 0.028977
Training Epoch: 1 [13568/45000]	Loss: 1.6086	LR: 0.029545
Training Epoch: 1 [13824/45000]	Loss: 1.5785	LR: 0.030114
Training Epoch: 1 [14080/45000]	Loss: 1.5853	LR: 0.030682
Training Epoch: 1 [14336/45000]	Loss: 1.5406	LR: 0.031250
Training Epoch: 1 [14592/45000]	Loss: 1.4685	LR: 0.031818
Training Epoch: 1 [14848/45000]	Loss: 1.6510	LR: 0.032386
Training Epoch: 1 [15104/45000]	Loss: 1.5017	LR: 0.032955
Training Epoch: 1 [15360/45000]	Loss: 1.6712	LR: 0.033523
Training Epoch: 1 [15616/45000]	Loss: 1.5374	LR: 0.034091
Training Epoch: 1 [15872/45000]	Loss: 1.5346	LR: 0.034659
Training Epoch: 1 [16128/45000]	Loss: 1.5266	LR: 0.035227
Training Epoch: 1 [16384/45000]	Loss: 1.5709	LR: 0.035795
Training Epoch: 1 [16640/45000]	Loss: 1.5493	LR: 0.036364
Training Epoch: 1 [16896/45000]	Loss: 1.6896	LR: 0.036932
Training Epoch: 1 [17152/45000]	Loss: 1.5683	LR: 0.037500
Training Epoch: 1 [17408/45000]	Loss: 1.6030	LR: 0.038068
Training Epoch: 1 [17664/45000]	Loss: 1.5947	LR: 0.038636
Training Epoch: 1 [17920/45000]	Loss: 1.5839	LR: 0.039205
Training Epoch: 1 [18176/45000]	Loss: 1.3980	LR: 0.039773
Training Epoch: 1 [18432/45000]	Loss: 1.7845	LR: 0.040341
Training Epoch: 1 [18688/45000]	Loss: 1.5118	LR: 0.040909
Training Epoch: 1 [18944/45000]	Loss: 1.7037	LR: 0.041477
Training Epoch: 1 [19200/45000]	Loss: 1.5636	LR: 0.042045
Training Epoch: 1 [19456/45000]	Loss: 1.7174	LR: 0.042614
Training Epoch: 1 [19712/45000]	Loss: 1.6136	LR: 0.043182
Training Epoch: 1 [19968/45000]	Loss: 1.6803	LR: 0.043750
Training Epoch: 1 [20224/45000]	Loss: 1.4445	LR: 0.044318
Training Epoch: 1 [20480/45000]	Loss: 1.6035	LR: 0.044886
Training Epoch: 1 [20736/45000]	Loss: 1.6658	LR: 0.045455
Training Epoch: 1 [20992/45000]	Loss: 1.5712	LR: 0.046023
Training Epoch: 1 [21248/45000]	Loss: 1.6597	LR: 0.046591
Training Epoch: 1 [21504/45000]	Loss: 1.6050	LR: 0.047159
Training Epoch: 1 [21760/45000]	Loss: 1.5567	LR: 0.047727
Training Epoch: 1 [22016/45000]	Loss: 1.4682	LR: 0.048295
Training Epoch: 1 [22272/45000]	Loss: 1.5951	LR: 0.048864
Training Epoch: 1 [22528/45000]	Loss: 1.5014	LR: 0.049432
Training Epoch: 1 [22784/45000]	Loss: 1.5450	LR: 0.050000
Training Epoch: 1 [23040/45000]	Loss: 1.5245	LR: 0.050568
Training Epoch: 1 [23296/45000]	Loss: 1.5291	LR: 0.051136
Training Epoch: 1 [23552/45000]	Loss: 1.3835	LR: 0.051705
Training Epoch: 1 [23808/45000]	Loss: 1.4487	LR: 0.052273
Training Epoch: 1 [24064/45000]	Loss: 1.4874	LR: 0.052841
Training Epoch: 1 [24320/45000]	Loss: 1.4811	LR: 0.053409
Training Epoch: 1 [24576/45000]	Loss: 1.4910	LR: 0.053977
Training Epoch: 1 [24832/45000]	Loss: 1.5105	LR: 0.054545
Training Epoch: 1 [25088/45000]	Loss: 1.5702	LR: 0.055114
Training Epoch: 1 [25344/45000]	Loss: 1.4912	LR: 0.055682
Training Epoch: 1 [25600/45000]	Loss: 1.4261	LR: 0.056250
Training Epoch: 1 [25856/45000]	Loss: 1.4767	LR: 0.056818
Training Epoch: 1 [26112/45000]	Loss: 1.4021	LR: 0.057386
Training Epoch: 1 [26368/45000]	Loss: 1.5199	LR: 0.057955
Training Epoch: 1 [26624/45000]	Loss: 1.4027	LR: 0.058523
Training Epoch: 1 [26880/45000]	Loss: 1.4500	LR: 0.059091
Training Epoch: 1 [27136/45000]	Loss: 1.4347	LR: 0.059659
Training Epoch: 1 [27392/45000]	Loss: 1.4237	LR: 0.060227
Training Epoch: 1 [27648/45000]	Loss: 1.4974	LR: 0.060795
Training Epoch: 1 [27904/45000]	Loss: 1.7077	LR: 0.061364
Training Epoch: 1 [28160/45000]	Loss: 1.3612	LR: 0.061932
Training Epoch: 1 [28416/45000]	Loss: 1.5152	LR: 0.062500
Training Epoch: 1 [28672/45000]	Loss: 1.5195	LR: 0.063068
Training Epoch: 1 [28928/45000]	Loss: 1.6563	LR: 0.063636
Training Epoch: 1 [29184/45000]	Loss: 1.3258	LR: 0.064205
Training Epoch: 1 [29440/45000]	Loss: 1.8182	LR: 0.064773
Training Epoch: 1 [29696/45000]	Loss: 1.4468	LR: 0.065341
Training Epoch: 1 [29952/45000]	Loss: 1.4443	LR: 0.065909
Training Epoch: 1 [30208/45000]	Loss: 1.3752	LR: 0.066477
Training Epoch: 1 [30464/45000]	Loss: 1.3006	LR: 0.067045
Training Epoch: 1 [30720/45000]	Loss: 1.3285	LR: 0.067614
Training Epoch: 1 [30976/45000]	Loss: 1.3101	LR: 0.068182
Training Epoch: 1 [31232/45000]	Loss: 1.2608	LR: 0.068750
Training Epoch: 1 [31488/45000]	Loss: 1.1628	LR: 0.069318
Training Epoch: 1 [31744/45000]	Loss: 1.3103	LR: 0.069886
Training Epoch: 1 [32000/45000]	Loss: 1.2677	LR: 0.070455
Training Epoch: 1 [32256/45000]	Loss: 1.3304	LR: 0.071023
Training Epoch: 1 [32512/45000]	Loss: 1.4226	LR: 0.071591
Training Epoch: 1 [32768/45000]	Loss: 1.4113	LR: 0.072159
Training Epoch: 1 [33024/45000]	Loss: 1.3324	LR: 0.072727
Training Epoch: 1 [33280/45000]	Loss: 1.3739	LR: 0.073295
Training Epoch: 1 [33536/45000]	Loss: 1.3894	LR: 0.073864
Training Epoch: 1 [33792/45000]	Loss: 1.4504	LR: 0.074432
Training Epoch: 1 [34048/45000]	Loss: 1.3016	LR: 0.075000
Training Epoch: 1 [34304/45000]	Loss: 1.2864	LR: 0.075568
Training Epoch: 1 [34560/45000]	Loss: 1.4536	LR: 0.076136
Training Epoch: 1 [34816/45000]	Loss: 1.4280	LR: 0.076705
Training Epoch: 1 [35072/45000]	Loss: 1.4215	LR: 0.077273
Training Epoch: 1 [35328/45000]	Loss: 1.2636	LR: 0.077841
Training Epoch: 1 [35584/45000]	Loss: 1.4169	LR: 0.078409
Training Epoch: 1 [35840/45000]	Loss: 1.2677	LR: 0.078977
Training Epoch: 1 [36096/45000]	Loss: 1.4113	LR: 0.079545
Training Epoch: 1 [36352/45000]	Loss: 1.3813	LR: 0.080114
Training Epoch: 1 [36608/45000]	Loss: 1.2788	LR: 0.080682
Training Epoch: 1 [36864/45000]	Loss: 1.2919	LR: 0.081250
Training Epoch: 1 [37120/45000]	Loss: 1.3071	LR: 0.081818
Training Epoch: 1 [37376/45000]	Loss: 1.3127	LR: 0.082386
Training Epoch: 1 [37632/45000]	Loss: 1.1552	LR: 0.082955
Training Epoch: 1 [37888/45000]	Loss: 1.2359	LR: 0.083523
Training Epoch: 1 [38144/45000]	Loss: 1.1870	LR: 0.084091
Training Epoch: 1 [38400/45000]	Loss: 1.3387	LR: 0.084659
Training Epoch: 1 [38656/45000]	Loss: 1.2927	LR: 0.085227
Training Epoch: 1 [38912/45000]	Loss: 1.3122	LR: 0.085795
Training Epoch: 1 [39168/45000]	Loss: 1.2567	LR: 0.086364
Training Epoch: 1 [39424/45000]	Loss: 1.2856	LR: 0.086932
Training Epoch: 1 [39680/45000]	Loss: 1.2845	LR: 0.087500
Training Epoch: 1 [39936/45000]	Loss: 1.1055	LR: 0.088068
Training Epoch: 1 [40192/45000]	Loss: 1.4341	LR: 0.088636
Training Epoch: 1 [40448/45000]	Loss: 1.2627	LR: 0.089205
Training Epoch: 1 [40704/45000]	Loss: 1.2055	LR: 0.089773
Training Epoch: 1 [40960/45000]	Loss: 1.1508	LR: 0.090341
Training Epoch: 1 [41216/45000]	Loss: 1.3081	LR: 0.090909
Training Epoch: 1 [41472/45000]	Loss: 1.2708	LR: 0.091477
Training Epoch: 1 [41728/45000]	Loss: 1.2432	LR: 0.092045
Training Epoch: 1 [41984/45000]	Loss: 1.3466	LR: 0.092614
Training Epoch: 1 [42240/45000]	Loss: 1.2053	LR: 0.093182
Training Epoch: 1 [42496/45000]	Loss: 1.1394	LR: 0.093750
Training Epoch: 1 [42752/45000]	Loss: 1.1326	LR: 0.094318
Training Epoch: 1 [43008/45000]	Loss: 1.1776	LR: 0.094886
Training Epoch: 1 [43264/45000]	Loss: 1.0849	LR: 0.095455
Training Epoch: 1 [43520/45000]	Loss: 1.0315	LR: 0.096023
Training Epoch: 1 [43776/45000]	Loss: 1.0454	LR: 0.096591
Training Epoch: 1 [44032/45000]	Loss: 1.2037	LR: 0.097159
Training Epoch: 1 [44288/45000]	Loss: 1.0936	LR: 0.097727
Training Epoch: 1 [44544/45000]	Loss: 1.1261	LR: 0.098295
Training Epoch: 1 [44800/45000]	Loss: 1.2639	LR: 0.098864
Training Epoch: 1 [45000/45000]	Loss: 1.0222	LR: 0.099432
Epoch 1 - Average Train Loss: 1.5435, Train Accuracy: 0.4394
Epoch 1 training time consumed: 17.97s
Evaluating Network.....
Test set: Epoch: 1, Average loss: 0.0073, Accuracy: 0.4907, Time consumed: 0.98s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_02_August_2025_00h_19m_19s/ResNet18-Cifar10-seed4-ret100-1-best.pth
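The LR column above climbs by ≈0.000568 per batch, from 0.000000 at the first step to 0.099432 at the last of the 176 steps in the epoch (45,000 samples at batch size 256, and 0.1/176 ≈ 0.000568). That is consistent with a linear warmup toward a base LR of 0.1 over the first epoch. A minimal sketch of such a schedule using PyTorch's LambdaLR; the optimizer hyperparameters (momentum, weight decay) are assumptions, not taken from this log:

```python
import torch
import torchvision
from torch.optim.lr_scheduler import LambdaLR

model = torchvision.models.resnet18(num_classes=10)  # stand-in for the run's ResNet18
base_lr = 0.1
steps_per_epoch = (45000 + 255) // 256               # 176 optimizer steps per epoch

optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                            momentum=0.9, weight_decay=5e-4)  # assumed values
# lr = base_lr * step / steps_per_epoch during epoch 1, then held at base_lr;
# the per-step increment is 0.1 / 176 ≈ 0.000568, matching the log above.
scheduler = LambdaLR(optimizer, lr_lambda=lambda s: min(s / steps_per_epoch, 1.0))

# Per training batch: optimizer.step(); scheduler.step()
```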
Valid (Test) Dl:  10000
Train Dl:  50000
Retain Train Dl:  45000
Forget Train Dl:  5000
Retain Valid Dl:  45000
Forget Valid Dl:  5000
retain_prob Distribution: 10000 samples
test_prob Distribution: 10000 samples
forget_prob Distribution: 5000 samples
Set1 Distribution: 5000 samples
Set2 Distribution: 5000 samples
Set1 Distribution: 5000 samples
Set2 Distribution: 5000 samples
Set1 Distribution: 10000 samples
Set2 Distribution: 10000 samples
Set1 Distribution: 10000 samples
Set2 Distribution: 10000 samples
Test Accuracy: 49.091796875
Retain Accuracy: 49.82430648803711
Zero Retrain Forgetting (ZRF): 0.8837631344795227
Membership Inference Attack (MIA): 0.4428
Forget vs Retain Membership Inference Attack (MIA): 0.532
Forget vs Test Membership Inference Attack (MIA): 0.4965
Test vs Retain Membership Inference Attack (MIA): 0.57275
Train vs Test Membership Inference Attack (MIA): 0.50125
Forget Set Accuracy (Df): 50.42509078979492
Method Execution Time: 953.16 seconds
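Each pairwise MIA above is a balanced binary classification between two sample sets (the Set1/Set2 distribution lines show the balanced subset sizes), where a score near 0.5 means the attacker cannot tell the sets apart from the model's behavior. The exact attack behind these numbers is not shown in the log; below is a minimal sketch of one common recipe, using per-sample cross-entropy losses as features and a cross-validated logistic-regression attacker (all names hypothetical):

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

@torch.no_grad()
def per_sample_losses(model, loader, device="cpu"):
    """Collect per-example cross-entropy losses as attack features."""
    model.eval().to(device)
    losses = []
    for x, y in loader:
        logits = model(x.to(device))
        losses.append(F.cross_entropy(logits, y.to(device), reduction="none").cpu())
    return torch.cat(losses).numpy().reshape(-1, 1)

def mia_score(model, loader_a, loader_b):
    """Balanced attack accuracy separating set A from set B; ~0.5 = indistinguishable."""
    fa = per_sample_losses(model, loader_a)
    fb = per_sample_losses(model, loader_b)
    n = min(len(fa), len(fb))                     # balance the two sets
    X = np.concatenate([fa[:n], fb[:n]])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
```

Read this way, the Forget vs Test score of 0.4965 above says the retrained model treats the forgotten samples essentially like unseen test data, which is the behavior an exact-retraining baseline is expected to show.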
